159 research outputs found

    When Clouds become Green: the Green Open Cloud Architecture

    Virtualization solutions appear as alternative approaches for companies to consolidate their operational services on a physical infrastructure, while preserving specific functionalities inside the Cloud perimeter (e.g., security, fault tolerance, reliability). These consolidation approaches are explored to reduce energy consumption by switching off unused computing nodes. We study the impact of virtual machine aggregation on energy consumption. Load-balancing strategies associated with the migration of virtual machines inside Cloud infrastructures are shown. We present the design of a new, original energy-efficient Cloud infrastructure called the Green Open Cloud.

    A year in the life of a large scale experimental distributed system: the Grid'5000 platform in 2008

    This report presents the usage results of Grid'5000 over the year 2008. Usage of the main operational Grid'5000 sites (Bordeaux, Lille, Lyon, Nancy, Orsay, Rennes, Sophia-Antipolis, Toulouse) is presented and analyzed.

    Studying the energy consumption of data transfers in Clouds: the Ecofen approach

    Energy consumption is one of the main limiting factors when designing large-scale Clouds. Evaluating the energy consumption of Cloud networking architectures, and providing the multi-level views required by providers and users, is a challenging issue. In this paper, we show how to evaluate and understand network choices (protocols, topologies) in terms of their contribution to the energy consumption of global Cloud infrastructures. By applying the ECOFEN model (Energy Consumption mOdel For End-to-end Networks) and the corresponding simulation framework, we profile and analyze the energy consumption of data transfers in Clouds.
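    The kind of per-device accounting an ECOFEN-style model performs can be sketched as follows. This is a hedged illustration, not the actual ECOFEN model: the split into a static (idle) share and a dynamic per-byte share, and all coefficient values, are assumptions.

```python
# Hypothetical sketch of an ECOFEN-style energy model: the energy of a
# transfer is the sum, over every network device it crosses, of a static
# (idle-power) share plus a dynamic per-byte cost. Coefficients are
# illustrative, not taken from the ECOFEN paper.

def transfer_energy(volume_bytes, duration_s, devices):
    """Energy (joules) attributed to one transfer across a list of devices.

    Each device is a dict with:
      p_idle - idle power drawn during the transfer (watts)
      e_byte - dynamic energy per byte forwarded (joules/byte)
    """
    total = 0.0
    for dev in devices:
        static = dev["p_idle"] * duration_s     # time-based share
        dynamic = dev["e_byte"] * volume_bytes  # traffic-based share
        total += static + dynamic
    return total

# Example: a 1 GB transfer over 20 s crossing two routers and one switch.
path = [
    {"p_idle": 15.0, "e_byte": 5e-9},  # edge router
    {"p_idle": 40.0, "e_byte": 3e-9},  # core router
    {"p_idle": 10.0, "e_byte": 4e-9},  # access switch
]
energy = transfer_energy(10**9, 20.0, path)  # -> 1312.0 J
```

    Such a decomposition makes the multi-level view mentioned above concrete: the static term depends on topology and transfer duration, the dynamic term on traffic volume, so protocol and topology choices show up in different parts of the sum.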

    Energy-efficient bandwidth reservation for bulk data transfers in dedicated wired networks

    The ever-increasing number of Internet-connected end-hosts calls for high-performance end-to-end networks, leading to an increase in the energy consumed by those networks. Our work deals with the energy consumption issue in dedicated networks with bandwidth provisioning and in-advance reservation of network equipment and bandwidth for bulk data transfers. First, we propose an end-to-end energy cost model for such networks, which describes the energy consumed by a transfer across all the equipment it crosses. This model is then used to develop a new energy-aware framework adapted to bulk data transfers over dedicated networks. The framework enables switching off unused network portions during certain periods of time to save energy. It is also endowed with prediction algorithms to avoid useless switch-offs, and with adaptive scheduling management to optimize the energy used by the transfers.
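    The "avoid useless switch-offs" idea above reduces to a break-even test: switching equipment off during an idle gap only pays off if the idle energy avoided exceeds the energy spent on the off/on transition. A minimal sketch, with all parameter values assumed rather than taken from the paper:

```python
# Illustrative break-even test (not the paper's prediction algorithm):
# decide whether to switch a piece of network equipment off during an
# idle gap between two reserved bulk transfers.

def should_switch_off(gap_s, p_idle_w, e_transition_j, t_transition_s):
    """True if powering the equipment off for `gap_s` seconds saves energy."""
    if gap_s <= t_transition_s:
        return False  # gap too short to even complete the off/on cycle
    saved = p_idle_w * (gap_s - t_transition_s)  # idle energy avoided
    return saved > e_transition_j

# A 10 W line card with a 2 s transition costing 50 J:
should_switch_off(3.0, 10.0, 50.0, 2.0)    # -> False (short gap: stay on)
should_switch_off(120.0, 10.0, 50.0, 2.0)  # -> True  (long gap: switch off)
```

    In this framing, the role of the prediction algorithms is to estimate the gap length reliably enough that this test is applied to real idle periods rather than to transfers arriving shortly after a switch-off.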

    Experimental analysis of vectorized instructions impact on energy and power consumption under thermal design power constraints

    Vectorized instructions were introduced to improve the performance of applications. However, they come with an increased power consumption cost. As a consequence, processors are designed to limit their frequency when such instructions are used, in order to stay within the thermal design power. In this paper, we study and compare the impact of thermal design power and SIMD instructions on the performance, power, and energy consumption of processors and memory. The study is performed on three different architectures providing different characteristics, and on four applications with different profiles (including one application with several phases, each phase having a different profile). The study shows that, because of processor frequency scaling, performance and power consumption are strongly related under thermal design power constraints. It also shows that AVX512 has unexpected behavior regarding processor power consumption, while DRAM power consumption is impacted by SIMD instructions because of the memory throughput they generate.

    On the energy footprint of I/O management in Exascale HPC systems

    The advent of unprecedentedly scalable yet energy-hungry Exascale supercomputers poses a major challenge in sustaining a high performance-per-watt ratio. With I/O management acquiring a crucial role in supporting scientific simulations, various I/O management approaches have been proposed to achieve high performance and scalability. However, how these approaches affect energy consumption has not been studied in detail. This paper therefore explores how much energy a supercomputer consumes while running scientific simulations under various I/O management approaches. In particular, we closely examine three radically different I/O schemes: time partitioning, dedicated cores, and dedicated nodes. To do so, we implement the three approaches within the Damaris I/O middleware and perform extensive experiments with one of the target HPC applications of the Blue Waters sustained-petaflop supercomputer project: the CM1 atmospheric model. Our experimental results, obtained on the French Grid'5000 platform, highlight the differences among the three approaches and illustrate how various configurations of the application and of the system can impact performance and energy consumption. Moreover, we propose and validate a mathematical model that estimates the energy consumption of an HPC simulation under different I/O approaches. The model gives hints for pre-selecting the most energy-efficient I/O approach for a particular simulation on a particular HPC system, and therefore provides a step towards energy-efficient HPC simulations on Exascale systems. To the best of our knowledge, our work provides the first in-depth look into the energy-performance tradeoffs of I/O management approaches.
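    The tradeoff such a model captures can be illustrated with a deliberately simplified energy estimate: total energy is roughly node count times per-node power times run time, and the I/O approach moves both the node count (dedicated nodes add machines) and the run time (time partitioning stalls compute during I/O). This is a hedged reading, not the paper's actual model; every number below is invented for illustration.

```python
# Simplified energy estimate (NOT the paper's validated model): the I/O
# approach changes both how many nodes run and for how long, and the
# product decides which approach wins energetically.

def run_energy(n_compute, n_io_nodes, p_node_w, runtime_s):
    """Energy in joules for one simulation run."""
    return (n_compute + n_io_nodes) * p_node_w * runtime_s

# Two hypothetical configurations of the same simulation:
time_partitioning = run_energy(64, 0, 250.0, 3600.0)  # I/O blocks compute
dedicated_nodes = run_energy(64, 4, 250.0, 3100.0)    # extra nodes, shorter run
```

    With these invented figures the dedicated-node run consumes less energy despite using four more machines, because the shorter runtime dominates; with a smaller runtime gap the conclusion flips, which is exactly why a pre-selection model is useful.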

    Opportunistic Scheduling in Clouds Partially Powered by Green Energy

    The fast growth of demand for computing and storage resources in data centers has considerably increased their energy consumption. Improving the utilization of data center resources and integrating renewable energy, such as solar and wind, have been proposed to reduce both the brown energy consumption and the carbon footprint of data centers. In this paper, we propose PIKA (oPportunistic schedulIng broKer infrAstructure), a novel framework to save energy in small mono-site data centers. To reduce brown energy consumption, PIKA integrates resource overcommitment techniques that help minimize the number of powered-on Physical Machines (PMs). In addition, PIKA dynamically schedules jobs and adjusts the number of powered-on PMs to match the variable renewable energy supply. Our simulations with a real-world job workload and solar power traces demonstrate that PIKA reduces brown energy consumption by up to 44.9% compared to a typical scheduling algorithm.
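    The core of "adjusting powered-on PMs to the renewable supply" can be sketched as a toy scheduling round. This is an illustration in the spirit of PIKA, not its actual algorithm: the one-job-per-PM simplification and the 200 W figure are assumptions.

```python
# Toy opportunistic scheduling round (illustrative, not PIKA's algorithm):
# power on just enough physical machines to run queued jobs within the
# currently available renewable power; jobs that do not fit wait.

PM_POWER_W = 200.0  # assumed power drawn by one powered-on physical machine

def schedule_step(queued_jobs, green_power_w):
    """Return (jobs_started, pms_powered_on) for one scheduling round."""
    max_pms = int(green_power_w // PM_POWER_W)
    started = queued_jobs[:max_pms]  # one job per PM, for simplicity
    return started, len(started)

# Midday (plenty of solar) vs evening (little solar):
jobs = ["j1", "j2", "j3", "j4", "j5"]
schedule_step(jobs, 900.0)  # -> starts 4 jobs on 4 PMs
schedule_step(jobs, 250.0)  # -> starts 1 job on 1 PM
```

    Overcommitment would enter this sketch by letting several jobs share one PM, lowering `max_pms` further for the same workload; the deferred jobs are what makes the scheduling opportunistic rather than lossy.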

    Energy-Aware Massively Distributed Cloud Facilities: The DISCOVERY Initiative

    Instead of the current trend of building larger and larger data centers (DCs) in a few strategic locations, the DISCOVERY initiative proposes to leverage any network point of presence (PoP, i.e., a small or medium-sized network center) available through the Internet. The key idea is to demonstrate a widely distributed Cloud platform that can better match the geographical dispersal of users and of renewable energy sources. This involves radical changes in the way resources are managed, but leveraging computing resources close to end-users will enable the delivery of a new generation of highly efficient and sustainable Utility Computing (UC) platforms, providing a strong alternative to the current Cloud model based on mega DCs (i.e., DCs composed of tens of thousands of resources). This poster presents the DISCOVERY initiative's efforts towards achieving energy-aware, massively distributed cloud facilities.

    To satisfy the escalating demand for Cloud Computing (CC) resources while realizing economies of scale, the production of computing resources is concentrated in mega data centers of ever-increasing size, where the number of physical resources one DC can host is limited by the capacity of its energy supply and its cooling system. To meet these critical needs in terms of energy supply and cooling, the current trend is toward building DCs in regions with abundant and affordable electricity, or in regions close to the polar circle to leverage free cooling techniques [1]. However, concentrating mega DCs in only a few attractive places raises several issues. First, a disaster in one of these areas would be dramatic for the IT services the DCs host, as connectivity to CC resources would not be guaranteed. Second, in addition to jurisdiction concerns, hosting computing resources in a few locations leads to unnecessary network overheads to reach each DC. Such overheads can prevent the adoption of the UC paradigm by several kinds of applications, such as mobile computing or big data applications.

    Balancing the use of batteries and opportunistic scheduling policies for maximizing renewable energy consumption in a Cloud data center

    The fast growth of cloud computing considerably increases the energy consumption of cloud infrastructures, especially data centers. To reduce brown energy consumption and carbon footprint, renewable energy such as solar and wind energy has recently been considered to supply new green data centers. As renewable energy is intermittent and fluctuates over time, this paper considers two fundamental approaches for improving the usage of renewable energy in a small or medium-sized data center. One approach is based on opportunistic scheduling: more jobs are performed when renewable energy is available. The other relies on Energy Storage Devices (ESDs), which first store the renewable energy surplus and then provide energy to the data center when renewable energy becomes unavailable. In this paper, we explore these two means of maximizing the utilization of on-site renewable energy for small data centers. Using real-world job workload and solar energy traces, our experimental results show the energy consumption for varying battery sizes and solar panel dimensions under opportunistic scheduling or an ESD-only solution. The results also demonstrate that opportunistic scheduling can reduce the required ESD capacity. Finally, we find an intermediate solution mixing both approaches in order to achieve a balance in all aspects, minimizing renewable energy losses. It also reduces brown energy consumption by up to 33% compared to the ESD-only solution.
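    The ESD behaviour described above reduces to a simple per-timestep balance: store surplus up to the battery's capacity, draw from the battery on deficit, and count whatever remains uncovered as brown energy. A minimal sketch, ignoring charge/discharge efficiency losses; all figures are illustrative.

```python
# Minimal ESD (battery) simulation step, as described in the abstract:
# renewable surplus charges the battery (excess beyond capacity is lost),
# deficits drain it, and the uncovered remainder is brown energy.
# Efficiency losses are deliberately ignored in this sketch.

def step(battery_j, capacity_j, green_j, demand_j):
    """One time step; returns (new_battery_level, brown_energy_used)."""
    if green_j >= demand_j:
        surplus = green_j - demand_j
        stored = min(surplus, capacity_j - battery_j)  # excess is lost
        return battery_j + stored, 0.0
    deficit = demand_j - green_j
    from_battery = min(deficit, battery_j)
    return battery_j - from_battery, deficit - from_battery

# Sunny step stores the surplus; cloudy step drains the battery first.
level, brown = step(0.0, 500.0, 800.0, 300.0)    # -> level 500.0, brown 0.0
level, brown = step(level, 500.0, 100.0, 700.0)  # -> level 0.0, brown 100.0
```

    The `min(surplus, capacity_j - battery_j)` term is where capacity matters: the surplus lost there is exactly the renewable energy that opportunistic scheduling can instead consume directly, which is why combining the two approaches reduces both losses and the required battery size.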

    Incentives for Mobile Cloud Environments through P2P Auctions

    Mobile Cloud Computing is a form of collaborative, decentralized Cloud which allows mobile devices to offload computation to a local Cloud formed by mobile and static devices. Mobile Cloud Computing provides better service to latency-sensitive applications, due to its physical proximity to the VM host. However, in these systems the problem of free-riding users becomes more acute, because the heterogeneity of devices (from smartphones to private servers) makes the gap in contributed resources much larger. In this work, we analyze the use of incentives for Mobile Clouds and propose a new auction system adapted to the high dynamism and heterogeneity of these systems. We compare our solution to other existing auction systems in a Mobile Cloud use case and show its suitability.